Recently, Transformers have shown promising performance on various vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute, especially for high-resolution vision tasks. Local self-attention performs attention computation within local regions to improve efficiency, but the receptive field of a single attention layer is then not large enough, resulting in insufficient context modeling. When observing a scene, humans usually focus on a local region while attending to non-attentional regions at coarse granularity. Based on this observation, we develop an axially expanded window self-attention mechanism that performs fine-grained self-attention within local windows and coarse-grained self-attention along the horizontal and vertical axes, and can therefore effectively capture both short- and long-range visual dependencies.
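A minimal numpy sketch of the two attention granularities the abstract combines: fine-grained attention inside non-overlapping local windows, plus coarse-grained attention along the horizontal and vertical axes. All function names, window sizes, and the single-head, projection-free formulation are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over a set of tokens.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def window_attention(x, win):
    # Fine-grained attention inside non-overlapping win x win windows.
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            patch = x[i:i+win, j:j+win].reshape(-1, C)
            out[i:i+win, j:j+win] = attention(patch, patch, patch).reshape(win, win, C)
    return out

def axial_attention(x):
    # Coarse-grained attention along the horizontal, then vertical axis.
    H, W, C = x.shape
    rows = np.stack([attention(x[i], x[i], x[i]) for i in range(H)])
    cols = np.stack([attention(rows[:, j], rows[:, j], rows[:, j])
                     for j in range(W)], axis=1)
    return cols

x = np.random.default_rng(0).standard_normal((8, 8, 16))
y = window_attention(x, win=4) + axial_attention(x)  # local + axial context
print(y.shape)  # (8, 8, 16)
```

Each spatial location thus aggregates detail from its window and context from its full row and column, without paying for all-pairs global attention.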
We propose Self-Supervised Implicit Attention (SSIA), a new approach that adaptively guides deep neural network models to gain attention by exploiting the properties of the models themselves. SSIA is a novel attention mechanism that does not require any extra parameters, computation, or memory access costs during inference, in contrast to existing attention mechanisms. In short, by regarding attention weights as high-level semantic information, we reconsider the implementations of existing attention mechanisms and further propose generating supervisory signals from higher network layers to guide lower network layers in their parameter updates. We achieve this by building a self-supervised learning task from the hierarchical features of the network itself, which operates only during the training phase. To verify the effectiveness of SSIA, we implement a specific instance in convolutional neural network models (called an SSIA block) and validate it on several image classification datasets. Experimental results show that the SSIA block can significantly improve model performance, even outperforming many popular attention methods that require additional parameters and computation costs, such as Squeeze-and-Excitation and Convolutional Block Attention Modules. Our implementation will be available on GitHub.
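A toy numpy sketch of the idea of using a higher layer's activations as a train-time-only supervisory signal for a lower layer's attention map. The saliency target and squared-error loss below are simplifying assumptions for illustration, not SSIA's actual formulation.

```python
import numpy as np

def attention_target(high_feat):
    # Treat the higher layer's channel-averaged activation magnitude as a
    # normalized pseudo attention label (a simplified stand-in for the
    # higher-layer supervisory signal described in the abstract).
    a = np.abs(high_feat).mean(axis=0)          # (H, W) saliency
    return a / a.sum()

def ssia_style_loss(low_attn, target):
    # Train-time-only objective pulling the lower layer's attention map
    # toward the higher layer's signal; nothing is added at inference.
    return ((low_attn - target) ** 2).mean()

rng = np.random.default_rng(0)
high = rng.standard_normal((64, 7, 7))          # higher-layer features (C, H, W)
low_attn = np.full((7, 7), 1 / 49)              # a uniform lower-layer attention map
loss = ssia_style_loss(low_attn, attention_target(high))
print(loss >= 0)  # True
```

Because the target is computed from the network's own forward pass, the auxiliary loss can simply be dropped at inference, matching the zero-overhead property the abstract claims.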
Positional encoding is important for Vision Transformers (ViTs) to capture the spatial structure of the input image, and its general efficacy has been demonstrated in ViTs. In our work, we propose to train ViTs to recognize the 2D positional encoding of input image patches; this apparently simple task in fact yields a meaningful self-supervisory task. Building on previous work on ViT positional encoding, we propose two positional labels dedicated to 2D images, covering absolute position and relative position. Our positional labels can be easily plugged into Transformers and combined with various current ViT variants. They can work in two ways: 1. as an auxiliary training target for vanilla ViTs (e.g., ViT-B and Swin-B) to improve model performance; 2. combined with self-supervised ViTs (e.g., MAE) to provide a stronger self-supervisory signal for semantic feature learning. Experiments show that, owing solely to the proposed self-supervised methods, Swin-B and ViT-B obtain improvements of 1.9% (top-1 acc) and 5.6% (top-1 acc) on Mini-ImageNet, respectively.
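A small sketch of what the two kinds of positional labels could look like for a patchified image. The 224-pixel image, 16-pixel patches, row-major indexing, and (row, col) offset encoding are illustrative assumptions; the paper's exact label construction may differ.

```python
import numpy as np

def patch_position_labels(H, W, patch):
    # Absolute-position labels: each patch gets its row-major 2D grid index,
    # which serves as the classification target of the auxiliary task.
    rows, cols = H // patch, W // patch
    return np.arange(rows * cols).reshape(rows, cols)

def relative_position_label(idx_a, idx_b, cols):
    # Relative-position label between two patches: their (row, col) offset.
    ra, ca = divmod(idx_a, cols)
    rb, cb = divmod(idx_b, cols)
    return (rb - ra, cb - ca)

abs_labels = patch_position_labels(224, 224, 16)   # 14 x 14 patch grid
print(abs_labels.shape)                            # (14, 14)
print(relative_position_label(0, 15, 14))          # (1, 1): one down, one right
```

During training, patch tokens (with their positional information withheld or shuffled) would be asked to predict these labels, giving the model a free supervisory signal about image layout.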
Since higher-order tensors are naturally suitable for representing multi-dimensional data in the real world, e.g., color images and videos, low-rank tensor representation has become one of the emerging areas in machine learning and computer vision. However, classical low-rank tensor representations can only represent data on a finite meshgrid due to their intrinsically discrete nature, which hinders their potential applicability in many scenarios beyond meshgrid. To break this barrier, we propose a low-rank tensor function representation (LRTFR), which can continuously represent data beyond meshgrid with infinite resolution. Specifically, the suggested tensor function, which maps an arbitrary coordinate to the corresponding value, can continuously represent data in an infinite real space. Parallel to discrete tensors, we develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization. We theoretically justify that both low-rank and smooth regularizations are harmoniously unified in the LRTFR, which leads to high effectiveness and efficiency for continuous data representation. Extensive multi-dimensional data recovery applications arising from image processing (image inpainting and denoising), machine learning (hyperparameter optimization), and computer graphics (point cloud upsampling) substantiate the superiority and versatility of our method as compared with state-of-the-art methods. Especially, the experiments beyond the original meshgrid resolution (hyperparameter optimization) or even beyond meshgrid (point cloud upsampling) validate the favorable performance of our method for continuous representation.
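A minimal 2D numpy sketch of a low-rank tensor *function*: a CP-style factorization whose factors are tiny coordinate networks, so the representation is defined at any real coordinate rather than only on a fixed meshgrid. The rank, network shapes, and random (untrained) weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 4  # assumed tensor-function rank

def make_factor_net(rank, hidden=32):
    # A tiny 1-D coordinate network: scalar coordinate -> R factor values.
    W1, b1 = rng.standard_normal((1, hidden)), np.zeros(hidden)
    W2, b2 = rng.standard_normal((hidden, rank)) / hidden, np.zeros(rank)
    def net(t):
        h = np.tanh(np.atleast_2d(t).T @ W1 + b1)
        return h @ W2 + b2          # shape (len(t), R)
    return net

fx, fy = make_factor_net(R), make_factor_net(R)

def tensor_function(xs, ys):
    # f(x, y) = sum_r fx_r(x) * fy_r(y): defined for ANY real coordinates,
    # so the representation is not tied to a finite meshgrid.
    return fx(xs) @ fy(ys).T

coarse = tensor_function(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
fine = tensor_function(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
print(coarse.shape, fine.shape)  # (8, 8) (64, 64)
```

Note that any sampling of this function has matrix rank at most R, so the low-rank structure survives at every resolution, which is the property that lets one query "beyond the original meshgrid".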
Explicit low-rank regularization, e.g., nuclear-norm regularization, has been widely used in imaging sciences. However, it has been found that implicit regularization outperforms explicit ones in various image processing tasks. Another issue is that a fixed explicit regularization limits applicability to broad classes of images, since different images favor different features captured by different explicit regularizations. As such, this paper proposes a new adaptive and implicit low-rank regularization that captures the low-rank prior dynamically from the training data. The core of our new adaptive and implicit low-rank regularization is to parameterize the Laplacian matrix in the Dirichlet-energy-based regularization, and we call the resulting regularization AIR. Theoretically, we show that the adaptive regularization of AIR enhances the implicit regularization and vanishes at the end of training. We validate the effectiveness of AIR on various benchmark tasks, showing that AIR is particularly favorable in scenarios where the missing entries are non-uniform. The code is available at https://github.com/lizhemin15/AIR-Net.
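A numpy sketch of the Dirichlet-energy regularizer tr(XᵀLX) with a parameterized Laplacian. The symmetric nonnegative-adjacency parameterization below is a simplified stand-in for the paper's learned one; it just guarantees that L = D − A is a valid graph Laplacian.

```python
import numpy as np

def dirichlet_energy(X, L):
    # Dirichlet-energy regularizer tr(X^T L X); L is a (learned) graph Laplacian.
    return np.trace(X.T @ L @ X)

def laplacian_from_params(A_params):
    # Parameterize L through a nonnegative symmetric adjacency, so that
    # L = D - A is a valid Laplacian whatever the free parameters are
    # (a simplified stand-in for AIR's parameterization).
    A = np.abs(A_params + A_params.T)
    np.fill_diagonal(A, 0.0)
    return np.diag(A.sum(1)) - A

rng = np.random.default_rng(0)
L = laplacian_from_params(rng.standard_normal((5, 5)))
X_smooth = np.ones((5, 3))                  # constant columns: zero energy
X_rough = rng.standard_normal((5, 3))
print(abs(dirichlet_energy(X_smooth, L)) < 1e-9)   # True: smooth signals unpenalized
print(dirichlet_energy(X_rough, L) > 0)            # True: rough signals penalized
```

Because the adjacency is itself trainable, the penalty can adapt to whatever smoothness/low-rank structure the data exhibits, rather than imposing one fixed prior such as the nuclear norm.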
Deformable image registration provides dynamic information about images and is essential in medical image analysis. However, due to the different characteristics of single-temporal brain MR images and multi-temporal echocardiograms, it is difficult to accurately register them with the same algorithm or model. We propose an unsupervised multi-scale correlation iterative registration network (SearchMorph), a model with three highlights. (1) We introduce cost volumes to strengthen feature correlations and construct a correlation pyramid to complement multi-scale correlation information. (2) We design a search module to search for the registration of features in the multi-scale pyramid. (3) We use a GRU module to iteratively refine the deformation field. The proposed network leads on common single-temporal registration tasks while also solving multi-temporal motion estimation tasks. Experimental results show that our proposed method achieves higher registration accuracy and a lower folding-point ratio than state-of-the-art methods.
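A small numpy sketch of the basic ingredient behind a correlation pyramid: an all-pairs cost volume of dot products between the feature maps of the fixed and moving images. Shapes and the scaled-dot-product normalization are illustrative assumptions.

```python
import numpy as np

def correlation_volume(f_fixed, f_moving):
    # All-pairs cost volume: dot product between every feature vector of the
    # fixed image and every feature vector of the moving image.
    H, W, C = f_fixed.shape
    a = f_fixed.reshape(-1, C)
    b = f_moving.reshape(-1, C)
    return (a @ b.T) / np.sqrt(C)           # (H*W, H*W)

rng = np.random.default_rng(0)
fixed = rng.standard_normal((8, 8, 16))     # fixed-image features (H, W, C)
moving = rng.standard_normal((8, 8, 16))    # moving-image features
corr = correlation_volume(fixed, moving)
print(corr.shape)  # (64, 64)
```

Pooling such a volume at several resolutions yields the multi-scale correlation pyramid that the search module can then look up when estimating, and the GRU when refining, the deformation field.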
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) An intermediate layer of the teacher network as target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using all of the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
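A numpy sketch of what "distilling token relations" could look like: matching the teacher's and student's token-to-token attention patterns with a KL divergence, rather than matching CLS tokens or raw features. The single-head formulation and the averaged KL loss are illustrative assumptions; TinyMIM's exact relation targets and loss may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_relation(q, k):
    # Token-to-token relation map: a softmax-normalized attention pattern.
    return softmax(q @ k.T / np.sqrt(q.shape[-1]))

def relation_distill_loss(teacher_rel, student_rel, eps=1e-8):
    # KL divergence between teacher and student relation maps, averaged
    # over query tokens.
    kl = (teacher_rel * (np.log(teacher_rel + eps)
                         - np.log(student_rel + eps))).sum(-1)
    return kl.mean()

rng = np.random.default_rng(0)
qt, kt = rng.standard_normal((2, 16, 32))      # teacher queries/keys (16 tokens)
qs, ks = rng.standard_normal((2, 16, 32))      # student queries/keys
loss = relation_distill_loss(token_relation(qt, kt), token_relation(qs, ks))
print(loss >= 0)  # True: the KL divergence is nonnegative
```

Because the relation map has a fixed tokens-by-tokens shape regardless of channel width, this target sidesteps the dimension mismatch between a wide teacher and a narrow student that feature-based distillation must work around.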
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
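A toy numpy sketch of the NAIVEATTACK-style poisoning step: stamping a small trigger onto the raw data before distillation begins, and relabeling the poisoned samples with the attacker's target class. The trigger shape, position, value, and target label are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def add_trigger(images, size=3, value=1.0):
    # Stamp a size x size square trigger onto the bottom-right corner of each
    # raw image; this happens BEFORE distillation, so the trigger's effect is
    # baked into the synthetic dataset rather than into model training.
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

rng = np.random.default_rng(0)
clean = rng.random((4, 28, 28))              # a toy batch of grayscale images
target_label = 0                             # attacker-chosen class (assumption)
poisoned = add_trigger(clean)
labels = np.full(len(poisoned), target_label)
print(poisoned.shape)  # (4, 28, 28)
```

DOORPING would go further by re-optimizing the trigger at every distillation iteration instead of fixing it once up front, which is why it reaches near-1.0 ASR where the naive variant does not.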